
    Optimal Data Collection For Informative Rankings Expose Well-Connected Graphs

    Given a graph where vertices represent alternatives and arcs represent pairwise comparison data, the statistical ranking problem is to find a potential function, defined on the vertices, such that the gradient of the potential function agrees with the pairwise comparisons. Our goal in this paper is to develop a method for collecting data for which the least squares estimator for the ranking problem has maximal Fisher information. Our approach, based on experimental design, is to view data collection as a bi-level optimization problem where the inner problem is the ranking problem and the outer problem is to identify data which maximizes the informativeness of the ranking. Under certain assumptions, the data collection problem decouples, reducing to a problem of finding multigraphs with large algebraic connectivity. This reduction of the data collection problem to graph-theoretic questions is one of the primary contributions of this work. As an application, we study the Yahoo! Movie user rating dataset and demonstrate that the addition of a small number of well-chosen pairwise comparisons can significantly increase the Fisher informativeness of the ranking. As another application, we study the 2011-12 NCAA football schedule and propose schedules with the same number of games which are significantly more informative. Using spectral clustering methods to identify highly-connected communities within the division, we argue that the NCAA could improve its notoriously poor rankings by simply scheduling more out-of-conference games. Comment: 31 pages, 10 figures, 3 tables
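    The ranking step described above reduces to a least-squares problem on the graph Laplacian of the comparison multigraph, and the informativeness criterion reduces to that Laplacian's algebraic connectivity. The following is a minimal NumPy sketch of both pieces; the toy comparison data and the function name `rank_and_connectivity` are illustrative assumptions, not material from the paper.

```python
import numpy as np

def rank_and_connectivity(n, comparisons):
    """Least-squares ranking from pairwise comparisons (i, j, y) meaning
    'alternative j beats i by margin y', plus the algebraic connectivity
    of the comparison graph. Illustrative sketch, not the paper's code."""
    L = np.zeros((n, n))   # graph Laplacian of the comparison multigraph
    b = np.zeros(n)        # divergence of the pairwise comparison data
    for i, j, y in comparisons:
        L[i, i] += 1; L[j, j] += 1
        L[i, j] -= 1; L[j, i] -= 1
        b[i] -= y;    b[j] += y
    # The potential phi solves the normal equations L phi = b (up to a constant).
    phi = np.linalg.lstsq(L, b, rcond=None)[0]
    # Fisher informativeness reduces to the algebraic connectivity, i.e. the
    # second-smallest Laplacian eigenvalue (Fiedler value).
    lam2 = np.linalg.eigvalsh(L)[1]
    return phi - phi.mean(), lam2

# Hypothetical toy data: (i, j, margin) meaning alternative j beats i by `margin`.
phi, lam2 = rank_and_connectivity(4, [(0, 1, 2.0), (1, 2, 1.0), (0, 2, 2.5), (2, 3, 0.5)])
print(phi, lam2)
```

    In this formulation, adding well-chosen comparison edges raises the second-smallest Laplacian eigenvalue, which is the quantity the data-collection problem seeks to maximize.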

    Learned SVD: solving inverse problems via hybrid autoencoding

    Our world is full of physics-driven data where effective mappings between data manifolds are desired. There is an increasing demand for understanding combined model-based and data-driven methods. We propose a nonlinear, learned singular value decomposition (L-SVD), which combines autoencoders that simultaneously learn and connect latent codes for desired signals and given measurements. We provide a convergence analysis for a specifically structured L-SVD that acts as a regularisation method. In a more general setting, we investigate the topic of model reduction via data dimensionality reduction to obtain a regularised inversion. We present a promising direction for solving inverse problems in cases where the underlying physics are not fully understood or have very complex behaviour. We show that the building blocks of learned inversion maps can be obtained automatically, with improved performance upon classical methods and better interpretability than black-box methods.
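    As a rough illustration of the hybrid-autoencoder idea, the PyTorch sketch below pairs one autoencoder for the measurements with one for the signals and connects their latent codes through a learned diagonal map, loosely playing the role of the singular values. The layer sizes, the two-layer encoders, and the diagonal latent map are assumptions for illustration, not the architecture analysed in the paper.

```python
import torch
import torch.nn as nn

class LearnedSVD(nn.Module):
    """Sketch of a hybrid autoencoder in the spirit of L-SVD: an encoder for the
    measurements, a decoder for the signal, and a simple (here diagonal) map that
    connects the two latent codes. Sizes and layers are illustrative assumptions."""
    def __init__(self, m, n, k):
        super().__init__()
        self.enc_y = nn.Sequential(nn.Linear(m, 64), nn.ReLU(), nn.Linear(64, k))
        self.dec_y = nn.Sequential(nn.Linear(k, 64), nn.ReLU(), nn.Linear(64, m))
        self.enc_x = nn.Sequential(nn.Linear(n, 64), nn.ReLU(), nn.Linear(64, k))
        self.dec_x = nn.Sequential(nn.Linear(k, 64), nn.ReLU(), nn.Linear(64, n))
        # Diagonal latent-to-latent map, a crude analogue of the singular values.
        self.sigma = nn.Parameter(torch.ones(k))

    def forward(self, y, x=None):
        zy = self.enc_y(y)
        x_hat = self.dec_x(self.sigma * zy)     # inversion path: measurements -> signal
        if x is None:
            return x_hat
        # Autoencoding paths that would be used as additional training losses.
        y_rec = self.dec_y(zy)
        x_rec = self.dec_x(self.enc_x(x))
        return x_hat, y_rec, x_rec
```

    In such a setup, training would combine reconstruction losses for both autoencoders with a loss on the inversion path, so that the latent codes of measurements and signals stay connected.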

    A Partially Learned Algorithm for Joint Photoacoustic Reconstruction and Segmentation

    In an inhomogeneously illuminated photoacoustic image, important information like vascular geometry is not readily available when only the initial pressure is reconstructed. To obtain the desired information, algorithms for image segmentation are often applied as a post-processing step. In this work, we propose to jointly perform photoacoustic reconstruction and segmentation by modifying a recently developed partially learned algorithm based on a convolutional neural network. We investigate the stability of the algorithm against changes in initial pressures and photoacoustic system settings. These insights are used to develop an algorithm that is robust to input and system settings. Our approach can easily be applied to other imaging modalities and can be modified to perform other high-level tasks different from segmentation. The method is validated on challenging synthetic and experimental photoacoustic tomography data in limited angle and limited view scenarios. It is computationally less expensive than classical iterative methods and enables higher quality reconstructions and segmentations than state-of-the-art learned and non-learned methods.
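    A hedged sketch of what a partially learned joint scheme can look like: an unrolled iteration that interleaves the physical forward operator with a small CNN proposing updates for the reconstruction and the segmentation at once. The operator interface `A`/`At`, the network size, and the number of iterations are placeholders, not the algorithm from the paper.

```python
import torch
import torch.nn as nn

class JointReconSeg(nn.Module):
    """Sketch of a partially learned, unrolled scheme that outputs a reconstruction
    and a segmentation together. The forward operator A and its adjoint At come
    from the physics; the small CNN proposing updates is a placeholder assumption."""
    def __init__(self, n_iter=8, channels=32):
        super().__init__()
        self.blocks = nn.ModuleList([
            nn.Sequential(
                nn.Conv2d(3, channels, 3, padding=1), nn.ReLU(),
                nn.Conv2d(channels, 2, 3, padding=1),  # updates for (image, segmentation)
            ) for _ in range(n_iter)
        ])

    def forward(self, y, A, At):
        x = At(y)                   # crude initial reconstruction (adjoint / back-projection)
        s = torch.zeros_like(x)     # segmentation logits
        for block in self.blocks:
            grad = At(A(x) - y)     # gradient of the data-fidelity term ||A x - y||^2 / 2
            dx, ds = block(torch.cat([x, s, grad], dim=1)).chunk(2, dim=1)
            x = x + dx
            s = s + ds
        return x, torch.sigmoid(s)  # reconstruction and soft segmentation mask
```

    Here `A` and `At` would be supplied as differentiable functions implementing the photoacoustic forward model and its adjoint; the data-consistency gradient keeps the iterates tied to the measurements while the learned blocks handle both the image update and the segmentation.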

    Deep Learning-Based Carotid Artery Vessel Wall Segmentation in Black-Blood MRI Using Anatomical Priors

    Carotid artery vessel wall thickness measurement is an essential step in the monitoring of patients with atherosclerosis. This requires accurate segmentation of the vessel wall, i.e., the region between an artery's lumen and outer wall, in black-blood magnetic resonance (MR) images. Commonly used convolutional neural networks (CNNs) for semantic segmentation are suboptimal for this task as their use does not guarantee a contiguous ring-shaped segmentation. Instead, in this work, we cast vessel wall segmentation as a multi-task regression problem in a polar coordinate system. For each carotid artery in each axial image slice, we aim to simultaneously find two non-intersecting nested contours that together delineate the vessel wall. CNNs applied to this problem enable an inductive bias that guarantees ring-shaped vessel walls. Moreover, we identify a problem-specific training data augmentation technique that substantially affects segmentation performance. We apply our method to segmentation of the internal and external carotid artery wall, and achieve top-ranking quantitative results in a public challenge, i.e., a median Dice similarity coefficient of 0.813 for the vessel wall and median Hausdorff distances of 0.552 mm and 0.776 mm for lumen and outer wall, respectively. Moreover, we show how the method improves over a conventional semantic segmentation approach. These results show that it is feasible to automatically obtain anatomically plausible segmentations of the carotid vessel wall with high accuracy. Comment: SPIE Medical Imaging 202
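    A small sketch of how a polar parameterisation can guarantee two non-intersecting nested contours: predict a positive lumen radius and a positive wall thickness per angle, and obtain the outer-wall radius as their sum. The softplus parameterisation, the 16 angles, and the random raw outputs below are assumptions for illustration, not the exact regression head from the paper.

```python
import numpy as np

def nested_contours(r_lumen_raw, wall_raw):
    """Turn raw per-angle network outputs into two non-intersecting nested contours:
    a positive lumen radius plus a positive wall thickness gives the outer-wall
    radius, so the outer contour always encloses the lumen."""
    r_lumen = np.logaddexp(0.0, r_lumen_raw)   # softplus: positive lumen radius
    thickness = np.logaddexp(0.0, wall_raw)    # softplus: positive wall thickness
    return r_lumen, r_lumen + thickness

# Hypothetical raw outputs for 16 polar angles around one carotid centreline point.
angles = np.linspace(0, 2 * np.pi, 16, endpoint=False)
r_lumen, r_outer = nested_contours(np.random.randn(16), np.random.randn(16))
lumen_xy = np.stack([r_lumen * np.cos(angles), r_lumen * np.sin(angles)], axis=1)
outer_xy = np.stack([r_outer * np.cos(angles), r_outer * np.sin(angles)], axis=1)
```

    Regressing radius and thickness instead of two independent contours is what builds the ring-shaped inductive bias into the model by construction.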

    4D imaging in tomography and optical nanoscopy

    This thesis contributes to the fields of mathematical image processing and inverse problems. An inverse problem is the task of computing the values of model parameters from observed data. Such problems arise in a wide variety of applications in science and engineering, such as medical imaging, biophysics, and astronomy. We mainly consider reconstruction problems with Poisson noise in tomography and optical nanoscopy. In the latter case, the task is to reconstruct images from blurred and noisy measurements, whereas in positron emission tomography the task is to visualize physiological processes of a patient. Standard methods for 3D static image reconstruction do not incorporate time-dependent information or dynamics, e.g. heartbeat or breathing in tomography, or cell motion in microscopy. This thesis is a treatise on models, analysis, and efficient algorithms for solving 3D and 4D time-dependent inverse problems.
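    As a concrete example of the static building block referred to above, the classical MLEM (Richardson-Lucy) iteration is the standard reconstruction method for Poisson-distributed data in emission tomography and nanoscopy. The NumPy sketch below uses a dense random matrix as a stand-in for the real tomographic or optical forward operator; it illustrates the Poisson-noise setting only, not the thesis's 4D time-dependent models.

```python
import numpy as np

def mlem(A, y, n_iter=50, eps=1e-12):
    """Classical MLEM / Richardson-Lucy iterations for Poisson-distributed data y
    and a linear forward operator A. The multiplicative update keeps the estimate
    nonnegative, which is essential for count data."""
    m, n = A.shape
    x = np.ones(n)                     # nonnegative initial estimate
    sens = A.T @ np.ones(m) + eps      # sensitivity image (column sums of A)
    for _ in range(n_iter):
        ratio = y / (A @ x + eps)      # measured over predicted counts
        x *= (A.T @ ratio) / sens      # multiplicative EM update
    return x

# Hypothetical toy problem: random nonnegative system with Poisson counts.
rng = np.random.default_rng(0)
A = rng.random((40, 20))
x_true = rng.random(20)
y = rng.poisson(A @ x_true)
x_hat = mlem(A, y)
```

    Time-dependent 3D/4D reconstruction, as treated in the thesis, extends this kind of static Poisson-noise model with motion or dynamics across frames rather than reconstructing each frame independently.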